Patent abstract:
A storage device may include a plurality of memory devices logically divided into a plurality of blocks, and a controller. In some examples, the controller may be configured to determine a respective fill percentage for each respective block of the plurality of blocks; determine the lowest fill percentage of the plurality of respective fill percentages; and, in response to determining that the lowest fill percentage exceeds a predetermined threshold value, perform an action relating to the health of the storage device.
Publication number: FR3026513A1
Application number: FR1559020
Filing date: 2015-09-24
Publication date: 2016-04-01
Inventor: Haining Liu
Applicant: HGST Netherlands BV
IPC primary class:
Patent description:

[0001] DIAGNOSTIC FOR DETERMINING THE HEALTH OF A STORAGE DEVICE

TECHNICAL FIELD

[1] The present disclosure relates to storage devices, such as solid-state drives.

BACKGROUND

[2] Solid-state drives (SSDs) can be used in computers in applications requiring relatively low-latency, high-capacity storage. For example, SSDs may exhibit lower latency, particularly for random reads and writes, than hard disk drives (HDDs). This may allow greater throughput for random reads from, and random writes to, an SSD compared with an HDD. In addition, SSDs can use multiple parallel data channels to read from and write to the memory devices, resulting in high sequential read and write speeds.

[3] SSDs may use nonvolatile memory devices, such as flash memory devices, that continue to store data without requiring persistent or periodic power. Flash memory devices are written and erased by applying a voltage to the memory cells. The voltage used to erase flash memory cells may be relatively high, and may cause physical changes to the flash memory cells over multiple erase operations. Hence, flash memory cells may wear out after many erase operations, reducing their charge storage capacity and diminishing or eliminating the ability to write new data into the cells.

SUMMARY

[4] In some examples, the disclosure describes a storage device that includes a plurality of memory devices logically divided into a plurality of blocks, and a controller. In some examples, the controller may be configured to determine a respective fill percentage for each respective block of the plurality of blocks; determine the lowest fill percentage of the plurality of respective fill percentages; and, in response to determining that the lowest fill percentage exceeds a predetermined threshold value, perform an action relating to the health of the storage device.
[5] In some examples, the disclosure describes a method comprising: determining, by a controller of a storage device, a respective fill percentage for each respective block of a plurality of blocks of the storage device; determining, by the controller, the lowest fill percentage of the plurality of respective fill percentages; and, in response to determining that the lowest fill percentage exceeds a predetermined threshold value, performing, by the controller, an action relating to the health of the storage device.

[6] In some examples, the disclosure describes a computer-readable storage medium storing instructions that, when executed, configure one or more processors of a storage device to: determine a respective fill percentage for each respective block of a plurality of blocks of the storage device; determine the lowest fill percentage of the plurality of respective fill percentages; and, in response to determining that the lowest fill percentage exceeds a predetermined threshold value, perform an action relating to the health of the storage device.

[7] In some examples, the disclosure describes a system comprising means for determining a respective fill percentage for each respective block of a plurality of blocks of the storage device. The system may also include means for determining the lowest fill percentage of the plurality of respective fill percentages. According to these examples, the system may also comprise means for performing, in response to determining that the lowest fill percentage exceeds a predetermined threshold value, an action relating to the health of the storage device.

[8] The details of one or more examples are set forth in the accompanying drawings and in the description below. Other features, objects, and advantages will become apparent from the description and drawings, and from the claims.

BRIEF DESCRIPTION OF THE DRAWINGS

[9] Fig. 1 is a conceptual and simplified block diagram illustrating an example system including a storage device connected to a host device.

[10] Fig. 2 is a conceptual block diagram illustrating an example memory device 12AA that includes a plurality of blocks, each block containing a plurality of pages.

[11] Fig. 3 is a conceptual and simplified block diagram illustrating an example controller.

[12] Fig. 4 is an example diagram illustrating the distribution of blocks as a function of fill percentage during steady-state operation of a storage device.

[13] Fig. 5 is a plot of the write amplification factor versus the over-provisioning ratio, p, for an example storage device.

[14] Fig. 6 is a plot of the worst-case garbage collection trigger versus the over-provisioning ratio for an example storage device.

[15] Fig. 7 is a flowchart illustrating an example technique for estimating the health of a storage device based on a lowest fill percentage value.

[16] Fig. 8 is a plot of the average garbage collection trigger versus the over-provisioning ratio for an example storage device.

DETAILED DESCRIPTION

[17] The disclosure describes a technique for monitoring the health of a storage device, such as a solid-state drive (SSD). In some examples, the technique may use one or more parameters recorded by a controller of the storage device during garbage collection operations of the storage device. A storage device may include memory devices that each include a plurality of blocks, each block containing memory cells that store data. A controller of a storage device may determine a fill percentage for each block of the storage device while performing garbage collection. The fill percentage is the ratio of the valid data stored in a block to the total capacity of the block. The controller can then determine the lowest fill percentage across all blocks.
The controller can compare this lowest fill percentage to a predetermined threshold value. If the lowest fill percentage is greater than the threshold value, this may indicate that the health of the storage device is deteriorating. The health of the storage device may be associated with the number of free blocks in the storage device. If the number of free blocks becomes low, the controller may be unable to perform enough wear leveling and garbage collection to maintain the performance of the storage device. If the number of free blocks is too small, the controller may be unable to recover lost performance, or new data may no longer be written to the storage device. Thus, for extended operation of the storage device, it is important to determine when the health of the storage device deteriorates before an unrecoverable state is reached.

[18] In some examples, for each garbage collection operation, the controller can determine the lowest fill percentage. The controller may then determine an average lowest fill percentage over a predetermined number of the most recent lowest fill percentages. By averaging the lowest fill percentages over a plurality of recent garbage collection operations, the controller can smooth the value of the lowest fill percentage across those operations. The controller can then compare this average lowest fill percentage to the threshold value.

[19] The controller may perform one or more predetermined operations in response to determining that the average lowest fill percentage is greater than the threshold value. For example, the controller may modify an operating parameter, or provide an indication of the health of the storage device to a host device of the storage device.
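The run-time check described above can be sketched in a few lines of Python (an illustrative model only, not the patented implementation; the names `fill_percentage`, `check_health`, `WINDOW`, and `THRESHOLD` are hypothetical):

```python
from collections import deque

WINDOW = 8         # hypothetical: number of recent garbage collection passes to average over
THRESHOLD = 0.85   # hypothetical threshold value, e.g., (W - 1) / W

# Rolling window of the lowest fill percentage seen in each recent pass.
recent_lowest = deque(maxlen=WINDOW)

def fill_percentage(valid_bytes: int, block_capacity: int) -> float:
    """Ratio of valid data stored in a block to the block's total capacity."""
    return valid_bytes / block_capacity

def check_health(blocks: list) -> bool:
    """Run once per garbage collection pass.

    `blocks` is a list of (valid_bytes, capacity) pairs, one per block.
    Returns True if the averaged lowest fill percentage exceeds the
    threshold, i.e., health appears to be deteriorating.
    """
    lowest = min(fill_percentage(v, c) for v, c in blocks)
    recent_lowest.append(lowest)
    # Smooth the lowest fill percentage over the last WINDOW passes.
    avg_lowest = sum(recent_lowest) / len(recent_lowest)
    return avg_lowest > THRESHOLD
```

On a healthy device, some blocks are nearly empty, so the lowest fill percentage stays well below the threshold; when even the emptiest block is nearly full, the averaged value crosses the threshold and a predetermined action can be taken.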
[20] In this way, the controller of the storage device can monitor the health of the storage device at run time, using a single value, or an average of a single value over a plurality of operations. This single value, or average of a single value, is then compared to a threshold value to make a determination of the health of the storage device. Thus, monitoring the health of the storage device can be relatively lightweight, with little overhead. In addition, the techniques described herein use a parameter already recorded during garbage collection.

[21] By comparison, some techniques for estimating the health of a storage device are implemented by a host device and require the host device to compute a write amplification factor based on storage device statistics (such as the amount of data that the host device has directed the storage device to write and the amount of data that the storage device has actually written, including data written during garbage collection, wear leveling, and the like). The host device then estimates the health of the storage device based on the computed write amplification factor. Collecting raw statistics from the storage device and computing the write amplification factor can introduce a delay into the determination of the health of the storage device, which can delay intervention if the health of the storage device has deteriorated. In some instances, the delay may be so severe that the storage device enters a state from which it cannot recover. Unlike these techniques, those described herein do not necessarily require determining a write amplification factor based on actual write statistics, and/or may be implemented by the controller of the storage device rather than by the host device.
Therefore, the techniques described herein may allow earlier detection or prediction of deterioration of the health of the storage device, and may reduce or eliminate the probability that the storage device will enter a state from which it cannot recover.

[22] Fig. 1 is a conceptual and simplified block diagram illustrating an example system comprising a storage device 2 connected to a host device 15. The host device 15 may use nonvolatile memory devices included in the storage device 2 to store and retrieve data. The host device 15 may comprise any computing device, including, for example, a computer server, a network-attached storage (NAS) unit, a desktop computer, a notebook (e.g., laptop) computer, a tablet, a set-top box, a mobile computing device such as a smartphone, a television, a camera, a display device, a digital media player, a video game console, a video streaming device, or the like.

[23] As illustrated in Fig. 1, the storage device 2 may comprise a controller 4, a non-volatile memory array 6 (NVMA 6), a cache 8, and an interface 10. In some examples, the storage device 2 may comprise additional components not shown in Fig. 1 for the sake of clarity. For example, the storage device 2 may comprise power delivery components, including, for example, a capacitor, a supercapacitor, or a battery; a printed circuit board (PCB) to which components of the storage device 2 are mechanically attached and which comprises electrically conductive traces that electrically interconnect the components of the storage device 2; or the like. In some examples, the storage device 2 comprises a solid-state drive (SSD).

[24] The storage device 2 may include an interface 10 for interfacing with the host device 15. The interface 10 may provide a mechanical connection, an electrical connection, or both, to the host device 15.
The interface 10 may include one or both of a data bus for exchanging data with the host device 15 and a control bus for exchanging commands with the host device 15. The interface 10 may operate according to any suitable protocol. For example, the interface 10 may operate according to one or more of the following protocols: Advanced Technology Attachment (ATA) (e.g., Serial ATA (SATA) and Parallel ATA (PATA)), Fibre Channel, Small Computer System Interface (SCSI), Serial Attached SCSI (SAS), Peripheral Component Interconnect (PCI), and PCI Express. The electrical connection of the interface 10 (e.g., the data bus, the control bus, or both) is electrically connected to the controller 4, providing an electrical connection between the host device 15 and the controller 4 and allowing data to be exchanged between the host device 15 and the controller 4.

[25] The storage device 2 may include an NVMA 6, which may include a plurality of memory devices 12AA-12NN (collectively, "memory devices 12") that may each be configured to store and/or retrieve data. For example, a memory device of the memory devices 12 may receive data and a message from the controller 4 that instructs the memory device to store the data. Similarly, a memory device of the memory devices 12 may receive a message from the controller 4 that instructs the memory device to retrieve data. In some examples, each of the memory devices 12 may be configured to store relatively large amounts of data (e.g., 128 MB, 256 MB, 512 MB, 1 GB, 2 GB, 4 GB, 8 GB, 16 GB, 32 GB, 64 GB, 128 GB, 256 GB, 512 GB, 1 TB, etc.).

[26] In some examples, memory devices 12 may include flash memory devices. Flash memory devices may include NAND- or NOR-based flash memory devices, and may store data based on a charge contained in a floating gate of a transistor for each flash memory cell. In NAND flash memory devices, the flash memory device may be divided into a plurality of blocks. Fig. 2 is a conceptual block diagram illustrating an example memory device 12AA that contains a plurality of blocks 16A-16N (collectively, "blocks 16"), each block having a plurality of pages 18AA-18NM (collectively, "pages 18"). Each block of the blocks 16 may include a plurality of NAND cells. Rows of NAND cells may be electrically connected in series using a word line to define a page (a page of the pages 18). Respective cells in each of a plurality of pages 18 may be electrically connected to respective bit lines. The controller 4 can write and read data to/from NAND flash memory devices at the page level, and erase data from NAND flash memory devices at the block level.

[27] In some examples, it is not practical for the controller 4 to be separately connected to each memory device of the memory devices 12. Therefore, the connections between the memory devices 12 and the controller 4 can be multiplexed. As an example, the memory devices 12 may be grouped into channels 14A-14N (collectively, "channels 14"). For example, as illustrated in Fig. 1, memory devices 12AA-12AN may be grouped into a first channel 14A, and memory devices 12NA-12NN may be grouped into an Nth channel 14N. The memory devices 12 grouped into each of the channels 14 may share one or more connections to the controller 4. For example, the memory devices 12 grouped into the first channel 14A may be attached to a common I/O bus and a common control bus. The storage device 2 may comprise a common I/O bus and a common control bus for each respective channel of the channels 14. In some examples, each of the channels 14 may include a set of chip enable (CE) lines that may be used to multiplex the memory devices on each channel. For example, each CE line may be connected to a respective memory device of the memory devices 12. In this way, the number of separate connections between the memory devices 12 and the controller 4 can be reduced.
In addition, since each channel has an independent set of connections to the controller 4, the reduction in connections may not significantly affect the data throughput, because the controller 4 may simultaneously issue different commands to each channel.

[28] In some examples, the storage device 2 may include a number of memory devices 12 selected to provide a total capacity that is greater than the capacity accessible to the host device 15. This is referred to as over-provisioning. For example, if the storage device 2 is indicated as providing 240 GB of user-accessible storage capacity, the storage device 2 may comprise a number of memory devices 12 sufficient to give a total storage capacity of 256 GB. The 16 GB of additional capacity is not accessible to the host device 15 or to a user of the host device 15. Instead, the additional capacity may provide additional blocks 16 to facilitate writes, garbage collection, wear leveling, and the like. In addition, the additional capacity may provide additional blocks 16 that can be used if some blocks wear out, become unusable, and are retired. The presence of the additional blocks 16 may allow worn blocks to be retired without causing a change in the storage capacity available to the host device 15. In some examples, the amount of over-provisioning may be defined as p = (T - D) / D, where p is the over-provisioning ratio, T is the total number of blocks of the storage device 2, and D is the number of blocks of the storage device 2 that are accessible to the host device 15.

[29] The storage device 2 comprises a controller 4, which may manage one or more operations of the storage device 2. Fig. 3 is a conceptual and simplified block diagram illustrating an example controller 20, which may be an example of the controller 4 of Fig. 1.
In some examples, the controller 20 may include a write module 22, a maintenance module 24, a read module 26, a plurality of channel controllers 28A-28N (collectively, "channel controllers 28"), and an address translation module 30. In other examples, the controller 20 may comprise additional modules or hardware units, or may include fewer modules or hardware units. The controller 20 may comprise a microprocessor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or other digital logic circuitry.

[30] The controller 20 can interface with the host device 15 through the interface 10 and manage the storage of data to, and the retrieval of data from, the memory devices 12. For example, the write module 22 of the controller 20 can manage writes to the memory devices 12. For example, the write module 22 may receive, through the interface 10, a message from the host device 15 that directs the storage device 2 to store data associated with a logical address, together with the data. The write module 22 can manage the writing of the data to the memory devices 12.

[31] For example, the write module 22 may communicate with the address translation module 30, which manages translation between the logical addresses used by the host device 15 to manage storage locations of data and the physical block addresses used by the write module 22 to control the writing of data to the memory devices. The address translation module 30 of the controller 20 may use a flash translation layer or table that translates logical addresses (or logical block addresses) of data stored by the memory devices 12 into physical block addresses of data stored by the memory devices 12. For example, the host device 15 may use the logical block addresses of data stored by the memory devices 12 in instructions or messages to the storage device 2, while the write module 22 uses the physical block addresses of the data to control the writing of data to the memory devices 12.
(Similarly, the read module 26 may use physical block addresses to control the reading of data from the memory devices 12.) The physical block addresses correspond to actual physical blocks (e.g., the blocks 16 of Fig. 2) of the memory devices 12.

[32] In this manner, the host device 15 may be allowed to use a static logical block address for a certain set of data, while the physical block address at which the data is actually stored may change. The address translation module 30 can maintain the flash translation layer or table to map the logical block addresses to the physical block addresses, allowing the host device 15 to use a static logical block address while the physical block address of the data may change, e.g., due to wear leveling, garbage collection, or the like.

[33] As described above, the write module 22 of the controller 20 can perform one or more operations to manage the writing of data to the memory devices 12. For example, the write module 22 can manage the writing of data to the memory devices 12 by selecting one or more blocks within the memory devices 12 to store the data, and causing the memory devices of the memory devices 12 that contain the selected blocks to actually store the data. As described above, the write module 22 may cause the address translation module 30 to update the flash translation layer or table based on the selected blocks.
For example, the write module 22 may receive a message from the host device 15 that includes a unit of data and a logical block address, select a block within a particular memory device of the memory devices 12 to store the data, cause the particular memory device of the memory devices 12 to actually store the data (e.g., through a channel controller of the channel controllers 28A-28N that corresponds to the particular memory device), and cause the address translation module 30 to update the flash translation layer or table to indicate that the logical block address corresponds to the selected block within the particular memory device.

[34] The flash translation layer or table may also facilitate dividing data received from the host device 15 across a plurality of physical block addresses. For example, in some cases, the data received from the host device may be in units that are larger than a single block. Thus, the controller 20 can select multiple blocks to each store a portion of the unit of data. Instead of selecting multiple blocks within a single memory device of the memory devices 12 to store the portions of the unit of data, the controller 20 can select blocks from a plurality of memory devices 12 to store the portions of the unit of data. The controller 20 may then cause the plurality of memory devices 12 to store the portions of the unit of data in parallel. In this manner, the controller 20 can increase the rate at which data can be stored to the memory devices 12 by writing portions of the data to different memory devices 12, e.g., connected to different channels 14.

[35] Writing a bit with a logical value of 0 (charged) over a bit with a previous logical value of 1 (uncharged) uses a relatively large amount of current, which can cause inadvertent changes in the charge of adjacent flash memory cells.
To protect against such inadvertent changes, an entire block of flash memory cells may be erased to a logical value of 1 (uncharged) before any data is written to cells within the block. For this reason, flash memory cells are erased at the block level and written at the page level.

[36] Thus, even to write an amount of data that would occupy less than one page, the controller 20 may cause an entire block to be erased. This can lead to write amplification, which refers to the ratio between the amount of data received from the host device 15 to be written to the memory devices 12 and the amount of data actually written to the memory devices 12. Write amplification contributes to faster wear of the flash memory cells than would occur without write amplification. Wear of flash memory cells can occur when flash memory cells are erased, due to the relatively high voltages used for erasing. Over a plurality of erase cycles, the relatively high voltages can cause physical changes in the flash memory cells. Eventually, the flash memory cells may wear out to the point that data can no longer be written to them.

[37] One technique that the controller 20 can implement to reduce write amplification and wear of the flash memory cells is to write data received from the host device 15 to unused (or free) blocks (e.g., blocks 16 of Fig. 2) or to partially used blocks. For example, if the host device 15 sends data to the storage device 2 that changes only a small part of the data already stored by the storage device 2, the controller 20 can mark the old data as stale or invalid and write the new data to unused block(s). Over time, this can reduce the number of erase operations to which blocks are exposed, compared with erasing the block that contains the old data and writing the updated data back into the same block.
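The out-of-place update strategy just described can be illustrated with a toy model (hypothetical structure and names; a real flash translation layer also persists its mapping and handles erasure, power loss, and many other cases):

```python
class ToyFTL:
    """Toy logical-to-physical mapping with out-of-place updates."""

    def __init__(self, num_blocks: int):
        self.free_blocks = list(range(num_blocks))  # pool of erased blocks
        self.l2p = {}         # logical block address -> physical block
        self.invalid = set()  # physical blocks holding only stale data

    def write(self, lba: int, data) -> int:
        # Instead of erasing and rewriting the old block in place, mark it
        # invalid and write the new data to a free (already erased) block.
        if lba in self.l2p:
            self.invalid.add(self.l2p[lba])
        phys = self.free_blocks.pop(0)
        self.l2p[lba] = phys
        return phys
```

Writing the same logical address twice consumes two physical blocks and marks the first one invalid; the erase of the invalid block is deferred to a later garbage collection pass, which is what reduces per-update erase operations.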
[38] Since empty (or free) blocks do not contain existing data, free blocks can be written without first erasing the data stored by the block. Thus, writing to free blocks may be faster than writing to previously full or partially full blocks. Writing to free blocks can also reduce the wear of the memory devices 12, because no erase operation precedes the write operation. In some examples, the storage device 2 may comprise a number of memory devices 12 sufficient to maintain a pool of free blocks, F, to maintain write speeds and reduce the wear of the memory devices 12. The number of free blocks, F, is related to the over-provisioning ratio, p, because the number of free blocks, F, is equal to the total number of blocks, T, minus the number of user-accessible blocks, D (F = T - D).

[39] In response to receiving a write command from the host device 15, the write module 22 can determine at which physical locations (blocks 16) of the memory devices 12 to write the data. For example, the write module 22 may request from the address translation module 30 or the maintenance module 24 one or more physical block addresses that are empty (e.g., store no data), partially empty (e.g., only some pages of the block store data), or store at least some invalid (or stale) data. Upon receiving one or more physical block addresses, the write module 22 may communicate a message to one or more of the channel controllers 28A-28N (collectively, "channel controllers 28"), which causes the channel controllers 28 to write the data to the selected blocks.

[40] The read module 26 can similarly control the reading of data from the memory devices 12. For example, the read module 26 may receive a message from the host device 15 requesting data with an associated logical block address.
The address translation module 30 can convert the logical block address into a physical block address using the flash translation layer or table. The read module 26 can then control one or more of the channel controllers 28 to retrieve the data from the physical block addresses.

[41] Each channel controller of the channel controllers 28 may be connected to a respective channel of the channels 14. In some examples, the controller 20 may have the same number of channel controllers 28 as the number of channels 14 of the storage device 2. The channel controllers 28 can perform close control of the addressing, programming (or writing), erasing, and reading of the memory devices 12 connected to the respective channels, e.g., under the control of the write module 22, the read module 26, and/or the maintenance module 24.

[42] The maintenance module 24 may be configured to perform operations related to maintaining the performance and extending the lifetime of the storage device 2 (e.g., the memory devices 12). For example, the maintenance module 24 can implement at least one of wear leveling or garbage collection.

[43] As described above, erasing flash memory cells may use relatively high voltages which, over a plurality of erase operations, can cause changes in the flash memory cells. After a certain number of erase operations, flash memory cells may degrade to the extent that data can no longer be written to them, and a block (e.g., a block 16 of Fig. 2) containing those cells may be retired (no longer used by the controller to store data). To increase the amount of data that can be written to the memory devices 12 before blocks wear out and are retired, the maintenance module 24 can implement wear leveling.
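Wear leveling of this kind, elaborated in the next paragraph, amounts to tracking per-block erase counts and preferring the least-worn free block for each new write; a minimal sketch under those assumptions (hypothetical names, not the patented implementation):

```python
import heapq

class WearLeveler:
    """Track per-block erase counts and hand out the least-worn free block."""

    def __init__(self, num_blocks: int):
        self.erase_count = [0] * num_blocks
        # Min-heap of (erase_count, block), so the least-erased block pops first.
        self.free = [(0, b) for b in range(num_blocks)]
        heapq.heapify(self.free)

    def allocate(self) -> int:
        """Pick the free block with the fewest erasures for the next write."""
        _, block = heapq.heappop(self.free)
        return block

    def erase(self, block: int) -> None:
        """Erase a block and return it to the free pool with its new count."""
        self.erase_count[block] += 1
        heapq.heappush(self.free, (self.erase_count[block], block))
```

Because erased blocks re-enter the pool keyed by their updated erase count, heavily erased blocks naturally wait at the back while lightly worn blocks absorb new writes, which keeps wear approximately uniform across blocks.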
[44] In wear leveling, the maintenance module 24 can track the number of erasures of, or writes to, each block or group of blocks. The maintenance module 24 may cause incoming data from the host device 15 to be written to a block or group of blocks that has undergone relatively few writes or erasures, in an attempt to maintain an approximately equal number of writes or erasures for each block or group of blocks. In this way, each block of the memory devices 12 can wear out at approximately the same rate, increasing the lifetime of the storage device 2.

[45] Although write amplification and wear of the flash memory cells can be reduced by writing updated data to different blocks rather than erasing and rewriting in place, this also produces blocks containing both valid (fresh) data and invalid (stale) data. To address this, the controller 20 can implement garbage collection. In a garbage collection operation, the controller 20 may analyze the contents of the blocks of the memory devices 12 to find a block that contains a high percentage of invalid (stale) data. The controller 20 can then rewrite the valid data from that block to a different block, and then erase the block. This can reduce the amount of invalid (stale) data stored by the memory devices 12 and increase the number of free blocks, but it also increases write amplification and wear of the memory devices 12.

[46] According to one or more examples of this disclosure, the storage device 2 (e.g., the maintenance module 24) can monitor the health of the storage device 2 using at least one parameter determined during garbage collection operations. For example, for each garbage collection operation, the maintenance module 24 can determine a fill percentage for each block of the memory devices 12.
The fill percentage is the ratio of the amount of valid data stored by the block to the total capacity of the block. A low fill percentage indicates that the block stores a small percentage of valid data. For example, a fill percentage of 0% indicates that the block stores no valid data, and a fill percentage of 100% indicates that all data stored by the block is valid. The maintenance module 24 can use the fill percentages during garbage collection to determine which blocks contain valid and invalid data, and can redistribute the data within the memory devices 12 to consolidate data from partially filled blocks and free up empty blocks. The lowest fill percentage is referred to herein as "C".

[47] Fig. 4 is an example diagram illustrating the distribution of blocks as a function of fill percentage during steady-state operation of a storage device. As shown in Fig. 4, the number of blocks with a given fill percentage can follow a unimodal distribution, with a relatively long tail toward the lower fill percentages. The leftmost point on the curve, closest to 0, defines the lowest fill percentage, C. During steady-state operation, the value of C may remain relatively constant from one garbage collection operation to the next. This can be predicted by assuming that the total capacity of a single logical block is N, and that the lowest fill percentage observed in the previous sampling of fill percentages is C. Under these assumptions, the worst-case drop in a block's fill can be expressed as (N / (T - F)) / N = 1 / (T - F), which is small, indicating that a virtually identical value of C can be predicted from one sampling to the next. However, the value of C may slowly increase over time as the number of free blocks of the storage device 2 decreases.

[48] As described above, when blocks wear out, replacement blocks are drawn from the free block pool to replace the worn blocks.
This reduces the number of free blocks. Thus, as the number of worn blocks increases, the number of free blocks decreases proportionally. Fig. 5 is a diagram of the write amplification factor as a function of the over-provisioning rate, p, for an exemplary storage device. As shown in Fig. 5, as the over-provisioning rate decreases (indicating fewer total blocks available and fewer free blocks), the write amplification factor increases. If the over-provisioning rate (and the number of free blocks) becomes too low, wear leveling and memory space recovery may be insufficient to maintain the performance of the storage device 2, and its performance may not recover. Therefore, for extended operation of the storage device 2, it is important to determine when the health of the storage device 2 is deteriorating before it reaches an unrecoverable state. The relationship of the worst-case write amplification factor, W, to the over-provisioning rate can be approximated as W = (1 + p) / (2 * p).  [49] Fig. 6 is a diagram of the worst-case memory space recovery start versus the over-provisioning rate for an exemplary storage device. As shown in FIG. 6, the worst-case memory space recovery start increases as the over-provisioning rate decreases, and generally correlates with the write amplification factor (shown in FIG. 5). The worst-case memory space recovery start, shown in Figure 6, correlates with the smallest fill percentage, C. For example, the value C is equal to the worst-case memory space recovery start value divided by the total number of blocks. In the example of Figure 6, the total number of blocks was 16,384, so the value C is the y-axis value shown in FIG. 6 divided by 16,384.  [50] The maintenance module 24 can use the smallest fill percentage, C, to estimate the health of the storage device 2 (for example, the memory devices 12).
The maintenance module 24 can compare the smallest fill percentage, C, with a threshold value to determine whether the health of the storage device 2 (for example, the memory devices 12) is deteriorating. For example, if the smallest fill percentage, C, is greater than the threshold value, the maintenance module 24 can determine that the health of the storage device 2 (for example, the memory devices 12) is deteriorating.  [51] The threshold value may be predetermined, and may be selected based on a determination or estimate of when the health of the storage device 2 has deteriorated to such an extent that a predetermined action must be performed before the deterioration increases and the performance of the storage device becomes unrecoverable. For example, the threshold value, TV, can be set as TV = (W - 1) / W, where W is the worst-case write amplification factor. In response to the determination that C exceeds TV, the maintenance module 24 may perform at least one predetermined action, such as modifying an operating parameter of the controller 20 or providing a message or indication to the host device 15.  [52] For example, the maintenance module 24 can modify an operating parameter of the controller 20, such as suspending any subsequent writes to the storage device 2. The maintenance module 24 may also send an indication to the host device 15 that the storage device 2 is now a read-only storage device, so that the host device 15 no longer transmits write commands to the storage device 2. By converting the storage device 2 into a read-only device, loss of the data stored in the storage device 2 can be avoided. As another example, the controller 20 can reduce (slow down) write speeds to the memory devices 12 of the storage device 2, which gives the controller 20 more time to perform memory space recovery.
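The relationships above can be illustrated with a minimal sketch. This is a hypothetical Python illustration, not the patent's implementation; all function names and the example block counts are assumptions:

```python
def over_provisioning_rate(total_blocks: int, host_blocks: int) -> float:
    """p = (T - D) / D: spare capacity relative to host-visible capacity."""
    return (total_blocks - host_blocks) / host_blocks

def worst_case_write_amplification(p: float) -> float:
    """W = (1 + p) / (2 * p): grows as over-provisioning shrinks."""
    return (1.0 + p) / (2.0 * p)

def threshold_value(w: float) -> float:
    """TV = (W - 1) / W: the fill level beyond which health is deteriorating."""
    return (w - 1.0) / w

def health_deteriorating(smallest_fill_c: float, tv: float) -> bool:
    """The maintenance check: C above TV signals deteriorating health."""
    return smallest_fill_c > tv

# With roughly 25% over-provisioning, W is about 2.5 and TV about 0.6:
# once even the emptiest block is more than 60% full, each memory space
# recovery operation frees little space.
p = over_provisioning_rate(total_blocks=16384, host_blocks=13107)
w = worst_case_write_amplification(p)
tv = threshold_value(w)
print(health_deteriorating(0.7, tv))  # True: emptiest block is 70% full
```

Note how the two formulas compound: halving p roughly doubles W, which pushes TV upward toward 1, so the check tolerates fuller "emptiest" blocks only when spare capacity is plentiful.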
[53] In some examples, the predetermined action may consist in the maintenance module 24 providing an indication of the health of the storage device 2 to the host device 15. The host device 15 may then optionally perform one or more predetermined actions in response to receiving the indication. For example, the host device 15 may be configured to slow (reduce) write instructions to the storage device 2, giving the controller 20 of the storage device 2 more time to perform memory space recovery operations. As another example, the host device 15 may be configured to mark the storage device 2 as read-only, so that the host device 15 no longer sends write instructions to the storage device 2. As an additional example, the host device 15 may be configured to reduce the capacity of the storage device 2 accessible to the host device 15, effectively increasing the number of free blocks available to the controller 20 of the storage device 2 for wear leveling and memory space recovery.  [54] In some examples, the maintenance module 24 may use a plurality of smallest fill percentage values, one for each of a plurality of memory space recovery operations, to determine an average smallest fill percentage. Fig. 7 is a flowchart illustrating an exemplary technique for estimating the health of a storage device as a function of a smallest fill percentage value. The technique of FIG. 7 will be described with concurrent reference to the storage device 2 and the controller 20 of FIGS. 1 and 3, to facilitate the description. However, in other examples, a different storage device or controller may implement the technique of FIG. 7, the storage device 2 and the controller 20 of FIGS. 1 and 2 may implement a different technique for estimating the health of the storage device 2, or both.
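The per-block quantity the technique samples, and its minimum C, can be sketched as follows. This is a hypothetical illustration; the `Block` type and byte counts are assumptions rather than the patent's data structures:

```python
from dataclasses import dataclass

@dataclass
class Block:
    valid_bytes: int  # amount of valid (fresh) data the block currently holds
    capacity: int     # total capacity of the block

def fill_fraction(block: Block) -> float:
    """Ratio of valid data to total capacity: 0.0 means the block stores
    no valid data, 1.0 means every stored byte is still valid."""
    return block.valid_bytes / block.capacity

def smallest_fill(blocks: list[Block]) -> float:
    """C: the lowest fill fraction observed across all blocks, sampled
    once per memory space recovery operation."""
    return min(fill_fraction(b) for b in blocks)

blocks = [Block(1024, 4096), Block(3072, 4096), Block(4096, 4096)]
print(smallest_fill(blocks))  # 0.25: the best garbage-collection candidate
```

The block realizing the minimum is also the cheapest one to reclaim, since only its valid fraction must be rewritten before erasure.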
[55] The technique of Figure 7 may include determining, by the maintenance module 24, the smallest fill percentage, C, across all blocks (42). As described above, the maintenance module 24 can determine a respective fill percentage for each block of the memory devices 12 as part of a memory space recovery operation.  [56] The technique of FIG. 7 also includes determining, by the maintenance module 24, an average smallest fill percentage, average C, over a plurality of memory space recovery operations (44). The maintenance module 24 can determine a value C for each memory space recovery operation. The maintenance module 24 can store the value C, for example in a C-value buffer or other data structure. In some examples, the C-value buffer or other data structure may have N entries, one C value for each of the last N memory space recovery operations (including the current memory space recovery operation). When the maintenance module 24 performs a memory space recovery operation and determines a C value for that operation, the newest C value can be placed in entry 0 of the C-value buffer or other data structure, and each previously stored value can be shifted by one entry (for example, the previous entry 0 becomes entry 1 and, in general, the previous entry m becomes entry m + 1). Entry N - 1, the final entry in the C-value buffer or other data structure, is discarded. In this manner, the C-value buffer or other data structure stores the C values for the last N memory space recovery operations.  [57] N, the size of the C-value buffer or other data structure, may be selected so that averaging the N C values smooths them, reducing noise in the C-value signal. Additionally, N can be selected so that the average C value incorporates only relatively recent C values. In some examples, N can be 1024.
The maintenance module 24 determines the average smallest fill percentage (the average C value) (44) using the N C values stored in the C-value buffer or other data structure. In this way, the average C value may be a moving average of the C values for the last N memory space recovery operations.  [58] Fig. 8 is a diagram of the average memory space recovery start versus the over-provisioning rate for an exemplary storage device. As shown in FIG. 8, the average memory space recovery start increases as the over-provisioning rate decreases, and generally correlates with the write amplification factor (shown in FIG. 5). The average memory space recovery start shown in Figure 8 correlates with the average smallest fill percentage, average C. For example, the average C value is equal to the average memory space recovery start value divided by the total number of blocks. In the example of Figure 8, the total number of blocks was 16,384, so the average C value is the y-axis value shown in Figure 8 divided by 16,384.  [59] The maintenance module 24 then compares the average C value with a predetermined threshold value (46). The threshold value may be predetermined, and may be selected based on a determination or estimate of when the health of the storage device 2 has deteriorated to such an extent that a predetermined action must be performed before the deterioration increases and the performance of the storage device becomes unrecoverable. For example, the threshold value, TV, can be set as TV = (W - 1) / W.  [60] In response to determining that the average C value exceeds TV (the "YES" branch of decision block 48), the maintenance module 24 can perform at least one predetermined action (50).
The predetermined action may relate to the health of the storage device 2 (for example, the memory devices 12), since the average C value exceeding TV indicates that the health of the memory devices 12 is deteriorating. For example, the maintenance module 24 may provide a message that causes the controller 20 to modify an operating parameter, or may provide a message or instruction that is communicated to the host device 15. For example, the controller 20 can change an operating parameter by slowing its response to write requests received from the host device 15, allowing the storage device 2 (for example, the controller 20) to keep up with write and memory space recovery operations at the current (lower) over-provisioning rate, or by converting the storage device 2 into a read-only device to prevent data loss. As another example, the host device 15 can modify its operation by reducing the rate of write instructions issued to the storage device 2.  [61] However, in response to the determination that the average C value does not exceed TV (the "NO" branch of decision block 48), the maintenance module 24 can begin the technique of FIG. 7 again, and can determine a new C value based on the performance of the next memory space recovery operation (42). The maintenance module 24 can add this new C value to the C-value buffer or other data structure and discard the oldest C value from the C-value buffer or other data structure. The maintenance module 24 can determine the average C value (44) using the N C values stored in the C-value buffer or other data structure and compare the average C value with the TV (46).
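The N-entry C-value buffer and the moving average it feeds can be sketched as follows. This is a hypothetical illustration: a `deque` stands in for the shift-and-discard buffer described above, and the class name is an assumption:

```python
from collections import deque

class CValueBuffer:
    """Holds the C values for the last N memory space recovery operations;
    appending the newest value automatically discards the oldest one
    (the role of entry N - 1 in the description)."""
    def __init__(self, n: int = 1024):  # the description suggests N = 1024
        self._values = deque(maxlen=n)

    def record(self, c: float) -> None:
        """Store the C value from the latest recovery operation (step 42)."""
        self._values.append(c)

    def average_c(self) -> float:
        """Moving average of the stored C values (step 44)."""
        return sum(self._values) / len(self._values)

buf = CValueBuffer(n=4)
for c in (0.10, 0.12, 0.11, 0.13, 0.50):  # fifth sample evicts the first
    buf.record(c)
print(buf.average_c())  # mean of the last four samples (about 0.215)
```

Using a bounded window rather than an all-time average is what lets one noisy spike, like the 0.50 sample above, move the estimate without being drowned out by old history.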
[62] In some examples, the maintenance module 24 may repeat this technique each time a memory space recovery operation is executed, until the average C value exceeds the TV, the maintenance module 24 then executing the predetermined action relating to the health of the storage device 2 (for example, the memory devices 12) (50). In other examples, the maintenance module 24 may repeat this technique periodically, for example after a predetermined number of memory space recovery operations rather than after each one.  [63] In this manner, the controller of the storage device can monitor the health of the storage device at run time, using a single C value or an average C value over a plurality of memory space recovery operations. This single C value or average C value is then compared with a threshold value (TV) to make a determination of the health of the storage device 2 (for example, the memory devices 12). Thus, the technique for monitoring the health of the storage device can be relatively lightweight, with low overhead. In addition, the techniques for monitoring the health of a storage device described herein use a parameter already tracked during memory space recovery.
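Putting the pieces together, the run-time check of FIG. 7 might look like the following sketch. The names are hypothetical, and the returned string is a placeholder for the predetermined action (for example, going read-only or throttling host writes):

```python
from collections import deque

def monitor_health(c_samples, tv, n=4):
    """After each memory space recovery operation, fold the newest smallest
    fill percentage C into an n-deep moving average (steps 42-44) and
    trigger the predetermined action once the average exceeds TV (48-50)."""
    window = deque(maxlen=n)
    for c in c_samples:            # one C value per recovery operation
        window.append(c)
        if sum(window) / len(window) > tv:
            return "action"        # e.g. read-only mode, slowed writes
    return "healthy"               # keep sampling on the next operation

print(monitor_health([0.2, 0.5, 0.7, 0.8, 0.9], tv=0.6))  # action
print(monitor_health([0.1, 0.2, 0.1], tv=0.6))            # healthy
```

In the first call, the average only crosses TV = 0.6 on the fifth sample, once the low initial C value has left the window; early transient values do not trigger the action.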
[0002] The techniques described herein do not always require determining a write amplification factor from actual write statistics, may allow early detection or prediction of deterioration of the health of a storage device, and/or can be implemented by the controller of the storage device rather than by the host device. [64] The techniques presented in this description can be implemented, at least in part, in hardware, software, firmware, or any combination thereof. For example, various aspects of the described techniques may be implemented within one or more processors, including one or more microprocessors, digital signal processors (DSPs), application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), or any other equivalent integrated or discrete logic circuitry, as well as any combination of such components. The term "processor" or "processing circuitry" may generally designate any of the foregoing logic circuits, alone or in combination with other logic circuitry, or any other equivalent circuitry. A control unit comprising hardware may also perform one or more of the techniques of this description. [65] Such hardware, software, and firmware may be implemented within the same device or within separate devices to support the various techniques described herein. In addition, any of the described units, modules, or components may be implemented together or separately as discrete but interoperable logic devices. The description of different features as modules or units is intended to emphasize various functional aspects and does not necessarily imply that such modules or units must be realized by separate hardware, software, or firmware components. 
The functionality associated with one or more modules or units may be performed by separate hardware, software, or firmware components, or integrated within common or separate hardware, software, or firmware components. [66] The techniques described in this specification may also be embodied or encoded in an article of manufacture comprising a computer-readable storage medium encoded with instructions. Instructions embedded or encoded in an article of manufacture comprising an encoded computer-readable storage medium may cause one or more programmable processors, or other processors, to carry out one or more of the techniques described herein, for example when the instructions included or encoded in the computer-readable storage medium are executed by the one or more processors. Computer-readable storage media may include random access memory (RAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electronically erasable programmable read-only memory (EEPROM), flash memory, a hard disk, a CD-ROM, a floppy disk, a cassette, magnetic media, optical media, or other computer-readable media. In some examples, an article of manufacture may include one or more computer-readable storage media. [67] In some examples, a computer-readable storage medium may comprise a non-transitory medium. The term "non-transitory" may indicate that the storage medium is not embodied in a carrier wave or in a propagated signal. In some examples, a non-transitory storage medium may store data that can change over time (for example, in RAM or cache memory).
[0003] Various examples have been described. These and other examples are within the scope of the following claims.
Claims (20)
[0001]
1. A storage device comprising: a plurality of memory devices logically divided into a plurality of blocks; and a controller configured to: determine a respective fill percentage for each respective block of the plurality of blocks; determine the lowest fill percentage of the plurality of respective fill percentages; and in response to determining that the lowest fill percentage exceeds a predetermined threshold value, perform an action relating to the good operating condition of the storage device.
[0002]
2. The storage device of claim 1, wherein the controller is configured to perform the action relating to the good operating condition of the storage device by modifying at least one operating parameter of the controller.
[0003]
3. The storage device of claim 2, wherein the controller is configured to change the operating parameter of the controller at least by putting the storage device in a read-only state.
[0004]
4. The storage device of claim 1, wherein the controller is configured to perform the action relating to the good operating condition of the storage device at least by providing an indication to a host device that causes the host device to modify at least one operating parameter.
[0005]
5. The storage device of claim 1, wherein the controller is configured to, for each respective memory space recovery operation of a plurality of memory space recovery operations: determine a respective fill percentage for each block of the plurality of blocks of the storage device; determine the smallest respective fill percentage of the plurality of respective fill percentages associated with the respective memory space recovery operation; determine a smallest average fill percentage based on the smallest respective fill percentages; and compare the smallest average fill percentage to the predetermined threshold value.
[0006]
6. The storage device of claim 1, wherein the controller is further configured to: determine a smallest average fill percentage based on the smallest fill percentage and the smallest fill percentages for each of a plurality of previously executed memory space recovery operations; and in response to determining that the smallest average fill percentage exceeds the predetermined threshold value, perform the action relating to the good operating condition of the storage device.
[0007]
7. The storage device of claim 1, wherein the threshold value equals (W - 1) / W, where W = (1 + p) / (2 * p) and p = (T - D) / D, T being the total number of blocks of the storage device and D being the number of blocks accessible to a host device of the storage device for writing data into the storage device.
[0008]
8. The storage device of claim 1, wherein the storage device comprises a solid-state drive (SSD).
[0009]
9. A method comprising the steps of: determining, by a controller of a storage device, a respective fill percentage for each respective block of a plurality of blocks of the storage device; determining, by the controller, the lowest fill percentage of the plurality of respective fill percentages; and in response to determining that the lowest fill percentage exceeds a predetermined threshold value, performing, by the controller, an action relating to the good operating condition of the storage device.
[0010]
10. The method of claim 9, wherein the step of performing the action relating to the good operating condition of the storage device comprises: modifying an operating parameter of the controller.
[0011]
The method of claim 10, wherein the step of changing the operating parameter of the controller comprises: putting the storage device in a read-only state.
[0012]
12. The method of claim 9, wherein the step of performing the action relating to the good operating condition of the storage device comprises: providing an indication to a host device that causes the host device to modify at least one operating parameter.
[0013]
13. The method of claim 9, further comprising, for each respective memory space recovery operation of a plurality of memory space recovery operations, the steps of: determining, by the controller, a respective fill percentage for each block of the plurality of blocks of the storage device; determining, by the controller, the smallest respective fill percentage of the plurality of respective fill percentages associated with the respective memory space recovery operation; determining, by the controller, a smallest average fill percentage based on the smallest respective fill percentages; and comparing, by the controller, the smallest average fill percentage to the predetermined threshold value.
[0014]
14. The method of claim 9, further comprising the step of: determining, by the controller, a smallest average fill percentage based on the smallest fill percentage and the smallest fill percentages for each of a plurality of previously executed memory space recovery operations, and wherein, in response to determining that the lowest fill percentage exceeds the predetermined threshold value, the step of performing the action relating to the good operating condition of the storage device comprises: in response to determining that the smallest average fill percentage exceeds a predetermined threshold value, performing the action relating to the good operating condition of the storage device.
[0015]
15. The method of claim 9, wherein the threshold value equals (W - 1) / W, where W = (1 + p) / (2 * p) and p = (T - D) / D, T being the total number of blocks of the storage device and D being the number of blocks accessible to a host device of the storage device for writing data into the storage device.
[0016]
16. A computer-readable storage medium comprising instructions that, when executed, configure one or more processors of a storage device to: determine a respective fill percentage for each respective block of a plurality of blocks of the storage device; determine the lowest fill percentage of the plurality of respective fill percentages; and in response to determining that the lowest fill percentage exceeds a predetermined threshold value, perform an action relating to the good operating condition of the storage device.
[0017]
17. The computer-readable storage medium of claim 16, further comprising instructions that, when executed, configure the one or more processors of the storage device to: determine a smallest average fill percentage based on the smallest fill percentage and the respective smallest fill percentages for each of a plurality of previously executed memory space recovery operations, and wherein the instructions that, when executed, configure the one or more processors of the storage device to, in response to determining that the lowest fill percentage exceeds the predetermined threshold value, perform the action relating to the good operating condition of the storage device comprise instructions that, when executed, configure the one or more processors of the storage device to, in response to determining that the smallest average fill percentage exceeds a predetermined threshold value, perform the action relating to the good operating condition of the storage device.
[0018]
18. The computer-readable storage medium of claim 16, further comprising instructions that, when executed, configure the one or more processors of the storage device to, for each respective memory space recovery operation of a plurality of memory space recovery operations: determine a respective fill percentage for each block of the plurality of blocks of the storage device; determine the smallest respective fill percentage of the plurality of respective fill percentages associated with the respective memory space recovery operation; determine a smallest average fill percentage based on the smallest respective fill percentages; and compare the smallest average fill percentage to the predetermined threshold value.
[0019]
19. A system comprising: means for determining a respective fill percentage for each respective block of a plurality of blocks of a storage device; means for determining the lowest fill percentage of the plurality of respective fill percentages; and means for performing, in response to determining that the lowest fill percentage exceeds a predetermined threshold value, an action relating to the good operating condition of the storage device.
[0020]
20. The system of claim 19, further comprising: means for determining a smallest average fill percentage based on the smallest fill percentage and the respective smallest fill percentages for each of a plurality of previously executed memory space recovery operations, the performing means comprising means for performing, in response to determining that the smallest average fill percentage exceeds a predetermined threshold value, the action relating to the good operating condition of the storage device.
Similar technologies:
Publication number | Publication date | Patent title
FR3026513A1|2016-04-01|
FR3028656A1|2016-05-20|
FR3026545A1|2016-04-01|
US9940261B2|2018-04-10|Zoning of logical to physical data address translation tables with parallelized log list replay
TWI416323B|2013-11-21|Method,system and semiconductor device for management workload
US20150199138A1|2015-07-16|Multi-tiered storage systems and methods for adaptive content streaming
US10318175B2|2019-06-11|SSD with heterogeneous NVM types
US8595451B2|2013-11-26|Managing a storage cache utilizing externally assigned cache priority tags
US8806165B2|2014-08-12|Mass-storage system utilizing auxiliary solid-state storage subsystem
US10019352B2|2018-07-10|Systems and methods for adaptive reserve storage
US10558395B2|2020-02-11|Memory system including a nonvolatile memory and a volatile memory, and processing method using the memory system
FR3023030B1|2019-10-18|INVALIDATION DATA AREA FOR CACHE
US9733833B2|2017-08-15|Selecting pages implementing leaf nodes and internal nodes of a data set index for reuse
CN105637470B|2020-09-29|Method and computing device for dirty data management
US10318205B2|2019-06-11|Managing data using a number of non-volatile memory arrays
KR102022721B1|2019-09-18|System for dynamically adaptive caching
KR101686346B1|2016-12-29|Cold data eviction method using node congestion probability for hdfs based on hybrid ssd
US9547460B2|2017-01-17|Method and system for improving cache performance of a redundant disk array controller
US9298397B2|2016-03-29|Nonvolatile storage thresholding for ultra-SSD, SSD, and HDD drive intermix
FR3020885A1|2015-11-13|
US20170052899A1|2017-02-23|Buffer cache device method for managing the same and applying system thereof
US10719243B2|2020-07-21|Techniques for preserving an expected lifespan of a non-volatile memory
US11176034B2|2021-11-16|System and method for inline tiering of write data
US8595433B2|2013-11-26|Systems and methods for managing destage conflicts
CN110502457B|2022-02-18|Metadata storage method and device
Patent family:
Publication number | Publication date
GB201516823D0|2015-11-04|
GB2531651B|2017-04-26|
DE102015012567A1|2016-03-31|
US20160092120A1|2016-03-31|
US10235056B2|2019-03-19|
CN105469829B|2020-02-11|
GB2531651A|2016-04-27|
FR3026513B1|2018-05-18|
CN105469829A|2016-04-06|
Cited documents:
Publication number | Application date | Publication date | Applicant | Patent title

US55455A|1866-06-12|Improvement in fastenings for fruit-boxes |
US30409A|1860-10-16|Chueh |
US6732203B2|2000-01-31|2004-05-04|Intel Corporation|Selectively multiplexing memory coupling global bus data bits to narrower functional unit coupling local bus|
US7136986B2|2002-11-29|2006-11-14|Ramos Technology Co., Ltd.|Apparatus and method for controlling flash memories|
US20050204187A1|2004-03-11|2005-09-15|Lee Charles C.|System and method for managing blocks in flash memory|
US7434011B2|2005-08-16|2008-10-07|International Business Machines Corporation|Apparatus, system, and method for modifying data storage configuration|
US7752391B2|2006-01-20|2010-07-06|Apple Inc.|Variable caching policy system and method|
US7512847B2|2006-02-10|2009-03-31|Sandisk Il Ltd.|Method for estimating and reporting the life expectancy of flash-disk memory|
US7653778B2|2006-05-08|2010-01-26|Siliconsystems, Inc.|Systems and methods for measuring the useful life of solid-state storage devices|
US8032724B1|2007-04-04|2011-10-04|Marvell International Ltd.|Demand-driven opportunistic garbage collection in memory components|
US8200904B2|2007-12-12|2012-06-12|Sandisk Il Ltd.|System and method for clearing data from a cache|
US8239611B2|2007-12-28|2012-08-07|Spansion Llc|Relocating data in a memory device|
US8078918B2|2008-02-07|2011-12-13|Siliconsystems, Inc.|Solid state storage subsystem that maintains and provides access to data reflective of a failure risk|
US8341331B2|2008-04-10|2012-12-25|Sandisk Il Ltd.|Method, apparatus and computer readable medium for storing data on a flash device using multiple writing modes|
US8140739B2|2008-08-08|2012-03-20|Imation Corp.|Flash memory based storage devices utilizing magnetoresistive random access memory to store files having logical block addresses stored in a write frequency file buffer table|
WO2010054410A2|2008-11-10|2010-05-14|Fusion Multisystems, Inc. |Apparatus, system, and method for predicting failures in solid-state storage|
CN102301339B|2009-04-21|2017-08-25|国际商业机器公司|Apparatus and method for controlling solid-state disk equipment|
US8176367B2|2009-05-28|2012-05-08|Agere Systems Inc.|Systems and methods for managing end of life in a solid state drive|
US8402242B2|2009-07-29|2013-03-19|International Business Machines Corporation|Write-erase endurance lifetime of memory storage devices|
US8463826B2|2009-09-03|2013-06-11|Apple Inc.|Incremental garbage collection for non-volatile memories|
US8489966B2|2010-01-08|2013-07-16|Ocz Technology Group Inc.|Solid-state mass storage device and method for failure anticipation|
EP2549482B1|2011-07-22|2018-05-23|SanDisk Technologies LLC|Apparatus, system and method for determining a configuration parameter for solid-state storage media|
US9026716B2|2010-05-12|2015-05-05|Western Digital Technologies, Inc.|System and method for managing garbage collection in solid-state memory|
US8719455B2|2010-06-28|2014-05-06|International Business Machines Corporation|DMA-based acceleration of command push buffer between host and target devices|
US8737136B2|2010-07-09|2014-05-27|Stec, Inc.|Apparatus and method for determining a read level of a memory cell based on cycle information|
US8949506B2|2010-07-30|2015-02-03|Apple Inc.|Initiating wear leveling for a non-volatile memory|
US8452911B2|2010-09-30|2013-05-28|Sandisk Technologies Inc.|Synchronized maintenance operations in a multi-bank storage system|
US8422303B2|2010-12-22|2013-04-16|HGST Netherlands B.V.|Early degradation detection in flash memory using test cells|
WO2012161659A1|2011-05-24|2012-11-29|Agency For Science, Technology And Research|A memory storage device, and a related zone-based block management and mapping method|
KR101907059B1|2011-12-21|2018-10-12|삼성전자 주식회사|Method for block management for non-volatile memory device and system for the same|
US20140018136A1|2012-07-16|2014-01-16|Yahoo! Inc.|Providing a real-world reward based on performance of action indicated by virtual card in an online card game|
US8862810B2|2012-09-27|2014-10-14|Arkologic Limited|Solid state device write operation management system|
US9141532B2|2012-12-26|2015-09-22|Western Digital Technologies, Inc.|Dynamic overprovisioning for data storage systems|
KR101854020B1|2012-12-31|2018-05-02|샌디스크 테크놀로지스 엘엘씨|Method and system for asynchronous die operations in a non-volatile memory|
CN103559115A|2013-09-29|2014-02-05|记忆科技有限公司|SSD intelligent monitoring system based on SMART|
US10380073B2|2013-11-04|2019-08-13|Falconstor, Inc.|Use of solid state storage devices and the like in data deduplication|
KR102157672B1|2013-11-15|2020-09-21|에스케이하이닉스 주식회사|Semiconductor apparatus and method of operating the same|
US10078452B2|2014-06-12|2018-09-18|Hitachi Ltd.|Performance information management system, management computer, and performance information management method|
US10599352B2|2015-08-14|2020-03-24|Samsung Electronics Co., Ltd.|Online flash resource allocation manager based on a TCO model|
US10235198B2|2016-02-24|2019-03-19|Samsung Electronics Co., Ltd.|VM-aware FTL design for SR-IOV NVME SSD|
KR20180093152A|2017-02-09|2018-08-21|에스케이하이닉스 주식회사|Data storage device and operating method thereof|
US10564886B2|2018-02-20|2020-02-18|Western Digital Technologies, Inc.|Methods and apparatus for controlling flash translation layer recycle from host|
KR20200037642A|2018-10-01|2020-04-09|삼성전자주식회사|Memory controller and storage device including the same|
US11042432B1|2019-12-20|2021-06-22|Western Digital Technologies, Inc.|Data storage device with dynamic stripe length manager|
Legal status:
2016-09-21| PLFP| Fee payment|Year of fee payment: 2 |
2017-08-10| PLFP| Fee payment|Year of fee payment: 3 |
2017-08-11| PLSC| Search report ready|Effective date: 20170811 |
2018-08-13| PLFP| Fee payment|Year of fee payment: 4 |
2019-08-15| PLFP| Fee payment|Year of fee payment: 5 |
2020-04-24| TP| Transmission of property|Owner name: WESTERN DIGITAL TECHNOLOGIES, INC., US Effective date: 20200319 |
2020-08-12| PLFP| Fee payment|Year of fee payment: 6 |
2021-08-12| PLFP| Fee payment|Year of fee payment: 7 |
Priority:
Application number | Application date | Patent title
US14498331|2014-09-26|
US14/498,331|US10235056B2|2014-09-26|2014-09-26|Storage device health diagnosis|